
    Emergence of heat extremes attributable to anthropogenic influences

    Climate scientists have demonstrated that a substantial fraction of the probability of numerous recent extreme events may be attributed to human-induced climate change. It is likely that for temperature extremes occurring over previous decades, a fraction of their probability was also attributable to anthropogenic influences. We identify the first record-breaking warm summers and years for which a discernible contribution can be attributed to human influence. We find a significant human contribution to the probability of record-breaking global temperature events as early as the 1930s. Since then, all of the last 16 record-breaking hot years globally had an anthropogenic contribution to their probability of occurrence. Aerosol-induced cooling delays the timing of a significant human contribution to record-breaking events in some regions. Without human-induced climate change, recent hot summers and years would have been very unlikely to occur.
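
    Attribution statements of this kind are often expressed through the fraction of attributable risk (FAR). Below is a minimal Python sketch of that calculation with invented numbers; p_nat and p_anth are hypothetical exceedance probabilities for natural-only and all-forcings simulations, not values from this study.

        # Fraction of attributable risk: the share of an event's probability
        # attributable to anthropogenic forcing. Numbers are illustrative only.
        def far(p_nat: float, p_anth: float) -> float:
            """FAR = 1 - p_nat / p_anth."""
            return 1.0 - p_nat / p_anth

        # Hypothetical: the record is 5x more likely with anthropogenic forcing.
        print(far(p_nat=0.01, p_anth=0.05))  # 0.8 -> 80% of the probability attributable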

    Excess pressure as an analogue of blood flow velocity

    INTRODUCTION: Derivation of blood flow velocity from a blood pressure waveform is a novel technique with potential clinical importance. Excess pressure, calculated from the blood pressure waveform via the reservoir-excess pressure model, is purported to be an analogue of blood flow velocity, but this had never been examined in detail; that examination was the aim of this study. METHODS: Intra-arterial blood pressure was measured sequentially at the brachial and radial arteries via fluid-filled catheter, simultaneously with blood flow velocity waveforms recorded via Doppler ultrasound on the contralateral arm (n = 98, aged 61 ± 10 years, 72% men). Excess pressure was derived from the intra-arterial blood pressure waveforms using pressure-only reservoir-excess pressure analysis. RESULTS: Brachial and radial blood flow velocity waveform morphology was closely approximated by excess pressure derived from the respective sites of measurement (median cross-correlation coefficient r = 0.96 and r = 0.95 for brachial and radial comparisons, respectively). In frequency analyses, coherence between blood flow velocity and excess pressure was similar for brachial and radial artery comparisons (median coherence = 0.93 and 0.92, respectively). Brachial and radial blood flow velocity pulse heights were correlated with their respective excess pressure pulse heights (r = 0.53, P < 0.001 and r = 0.43, P < 0.001, respectively). CONCLUSION: Excess pressure is an analogue of blood flow velocity, affording the opportunity to derive potentially important information related to arterial blood flow using only the blood pressure waveform.
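
    The waveform-morphology comparison reported above can be illustrated with a short Python sketch: a normalized cross-correlation between two pulse waveforms, whose peak corresponds to the cross-correlation coefficient at the best lag. The signals below are synthetic placeholders, not the study's intra-arterial or Doppler recordings, and the authors' exact alignment procedure may differ.

        import numpy as np

        def max_norm_xcorr(x: np.ndarray, y: np.ndarray) -> float:
            """Peak of the normalized cross-correlation between two signals."""
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            c = np.correlate(x, y, mode="full") / len(x)
            return float(c.max())

        t = np.linspace(0, 1, 500)
        excess = np.maximum(np.sin(2 * np.pi * t) ** 3, 0)            # toy systolic pulse
        velocity = np.roll(excess, 5) + 0.05 * np.random.randn(500)   # lagged, noisy copy
        print(max_norm_xcorr(excess, velocity))  # near 1 for similar waveform shapes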

    Using Electronic Technology to Improve Clinical Care -- Results from a Before-after Cluster Trial to Evaluate Assessment and Classification of Sick Children According to Integrated Management of Childhood Illness (IMCI) Protocol in Tanzania.

    Poor adherence to the Integrated Management of Childhood Illness (IMCI) protocol reduces its potential impact on under-five morbidity and mortality. Electronic technology could improve adherence; however, there are few studies demonstrating the benefits of such technology in resource-poor settings. This study estimates the impact of electronic technology on adherence to the IMCI protocols as compared to the current paper-based protocols in Tanzania. In four districts in Tanzania, 18 clinics were randomly selected for inclusion. At each site, observers documented critical parts of the clinical assessment of children aged 2 months to 5 years. The first set of observations occurred during examination of children using the paper-based IMCI (pIMCI), and the next set occurred during examination using the electronic IMCI (eIMCI). Children were then re-examined by an IMCI expert and the diagnoses were compared. A total of 1221 children (671 paper, 550 electronic) were observed. For all ten critical IMCI items included in both systems, adherence to the protocol was greater for eIMCI than for pIMCI: the proportion assessed under pIMCI ranged from 61% to 98%, compared to 92% to 100% under eIMCI (p < 0.05 for each of the ten assessment items). Use of electronic systems improved the completeness of assessment of children with acute illness in Tanzania. Given the before-after design, temporal confounding is the primary limitation; however, data collection for both phases occurred over a short period (one month), so temporal confounding was expected to be minimal. The results suggest that the use of electronic IMCI protocols can improve the completeness and consistency of clinical assessments; future studies will examine the long-term health and health systems impact of eIMCI.
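
    The abstract reports item-wise significance at p < 0.05 without naming the test; a two-proportion z-test is one standard way to compare adherence proportions for a single assessment item. The Python sketch below uses counts reconstructed from the reported extremes purely for illustration, not the trial's actual per-item data.

        from math import sqrt
        from statistics import NormalDist

        def two_prop_z(x1: int, n1: int, x2: int, n2: int) -> float:
            """Two-sided p-value for H0: p1 == p2, using a pooled z-test."""
            p1, p2 = x1 / n1, x2 / n2
            p = (x1 + x2) / (n1 + n2)
            z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
            return 2 * (1 - NormalDist().cdf(abs(z)))

        # Illustrative: 61% of 671 pIMCI children vs 92% of 550 eIMCI children.
        print(two_prop_z(int(0.61 * 671), 671, int(0.92 * 550), 550))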

    A modern multicentennial record of radiocarbon variability from an exactly dated bivalve chronology at the Tree Nob site (Alaska coastal current)

    Quantifying the marine radiocarbon reservoir effect, its offsets (ΔR), and ΔR variability over time is critical to improving dating estimates of marine samples while also providing a proxy of water mass dynamics. In the northeastern Pacific, where no high-resolution time series of ΔR has yet been established, we sampled radiocarbon (14C) from exactly dated growth increments in a multicentennial chronology of the long-lived bivalve Pacific geoduck (Panopea generosa) at the Tree Nob site, coastal British Columbia, Canada. Samples were taken at approximately decadal intervals from 1725 CE to 1920 CE and indicate average ΔR values of 256 ± 22 years (1σ), consistent with existing discrete estimates. Temporal variability in ΔR is small relative to analogous Atlantic records, except for an unusually old-water event in 1802–1812. The correlation between ΔR and sea surface temperature (SST) reconstructed from geoduck increment width is weakly significant (r2 = .29, p = .03) when the 1802–1812 interval is excluded, indicating that warm water is generally old. That interval contains the oldest (−2.1σ) radiocarbon anomaly, which is coincident with the coldest (−2.7σ) anomalies of the temperature reconstruction. An additional 32 14C values spanning 1952–1980 were detrended using a northeastern Pacific bomb pulse curve. Significant positive correlations were identified between the detrended 14C data and annual El Niño Southern Oscillation (ENSO) and summer SST, such that cooler conditions are associated with older water. Thus, 14C is generally stable, with weak and potentially inconsistent associations to climate variables, but capable of infrequent excursions, as illustrated by the unusually cold, old-water 1802–1812 interval.
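
    For readers unfamiliar with reservoir offsets: for an exactly dated increment, ΔR is the measured marine 14C age minus the marine calibration curve's 14C age at that calendar year. The Python sketch below uses invented ages to show the arithmetic; the study's actual samples, calibration curve, and uncertainties differ.

        import numpy as np

        cal_years = np.array([1730, 1740, 1750])            # calendar year CE (exactly dated)
        measured_14c = np.array([560.0, 575.0, 540.0])      # measured marine 14C age (yr BP)
        marine_curve_14c = np.array([300.0, 320.0, 290.0])  # calibration-curve age at same year

        delta_r = measured_14c - marine_curve_14c           # local reservoir offset per sample
        print(f"mean dR = {delta_r.mean():.0f} +/- {delta_r.std(ddof=1):.0f} yr (1 sigma)")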

    Determinants of medication adherence to antihypertensive medications among a Chinese population using Morisky medication adherence scale

    BACKGROUND AND OBJECTIVES: Poor adherence to medications is one of the major public health challenges. Only one-third of the hypertensive population reports successful control of blood pressure, largely because of poor drug adherence. However, there are relatively few reports on adherence levels and their associated factors among Chinese patients. This study aimed to examine the adherence profiles and the factors associated with antihypertensive drug adherence among Chinese patients. METHODS: A cross-sectional study was conducted in an outpatient clinic located in the New Territories Region of Hong Kong. Adult patients who were currently taking at least one antihypertensive drug were invited to complete a self-administered questionnaire consisting of a basic socio-demographic profile, self-perceived health status, and self-reported medication adherence. The outcome measure was the Morisky Medication Adherence Scale (MMAS-8); good adherence was defined as an MMAS score greater than 6 points (out of a total of 8 points). RESULTS: Of 1114 patients, 725 (65.1%) had good adherence to antihypertensive agents. Binary logistic regression analysis was conducted. Younger age, shorter duration of antihypertensive use, being employed, and poor or very poor self-perceived health status were negatively associated with drug adherence. CONCLUSION: This study found a high proportion of poor medication adherence among hypertensive subjects. Patients with factors associated with poor adherence should be more closely monitored to optimize their drug-taking behavior.
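
    As a sketch of the analysis pipeline, the snippet below dichotomizes MMAS-8 scores at the study's cutoff (>6 = good adherence) and fits a binary logistic regression. The data are random placeholders and the covariates a hypothetical subset, not the study's dataset.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 200
        age = rng.normal(60, 10, n)                  # years (placeholder)
        duration = rng.normal(8, 4, n)               # years on antihypertensives (placeholder)
        mmas8 = rng.integers(0, 9, n).astype(float)  # MMAS-8 total score, 0-8

        good_adherence = (mmas8 > 6).astype(int)     # cutoff used in the study
        X = np.column_stack([age, duration])
        model = LogisticRegression().fit(X, good_adherence)
        print(dict(zip(["age", "duration"], model.coef_[0])))  # log-odds per unit change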

    Theory of Star Formation

    We review current understanding of star formation, outlining an overall theoretical framework and the observations that motivate it. A conception of star formation has emerged in which turbulence plays a dual role, both creating overdensities to initiate gravitational contraction or collapse, and countering the effects of gravity in these overdense regions. The key dynamical processes involved in star formation -- turbulence, magnetic fields, and self-gravity -- are highly nonlinear and multidimensional. Physical arguments are used to identify and explain the features and scalings involved in star formation, and results from numerical simulations are used to quantify these effects. We divide star formation into large-scale and small-scale regimes and review each in turn. Large scales range from galaxies to giant molecular clouds (GMCs) and their substructures. Important problems include how GMCs form and evolve, what determines the star formation rate (SFR), and what determines the initial mass function (IMF). Small scales range from dense cores to the protostellar systems they beget. We discuss formation of both low- and high-mass stars, including ongoing accretion. The development of winds and outflows is increasingly well understood, as are the mechanisms governing angular momentum transport in disks. Although outstanding questions remain, the framework is now in place to build a comprehensive theory of star formation that will be tested by the next generation of telescopes.
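
    One standard scaling that physical arguments of this kind build on is the Jeans mass, above which thermal pressure can no longer resist self-gravity. The Python sketch below evaluates it for illustrative cold-core conditions; the temperature and density are assumptions, not values from the review.

        import math

        k_B = 1.380649e-23   # Boltzmann constant, J/K
        G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
        m_H = 1.6735e-27     # hydrogen atom mass, kg
        mu = 2.33            # mean molecular weight of molecular gas
        M_sun = 1.989e30     # solar mass, kg

        def jeans_mass(T: float, n_cm3: float) -> float:
            """Jeans mass (kg) for temperature T (K) and number density n (cm^-3)."""
            rho = mu * m_H * n_cm3 * 1e6  # mass density, kg/m^3
            return (5 * k_B * T / (G * mu * m_H)) ** 1.5 * (3 / (4 * math.pi * rho)) ** 0.5

        print(jeans_mass(10, 1e4) / M_sun)  # a few solar masses for a 10 K dense core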

    Approaches for estimating minimal clinically important differences in systemic lupus erythematosus.

    A minimal clinically important difference (MCID) is an important concept used to determine whether a medical intervention improves perceived outcomes in patients. Prior to the introduction of the concept in 1989, studies focused primarily on statistical significance. As most recent clinical trials in systemic lupus erythematosus (SLE) have failed to show significant effects, determining a clinically relevant threshold for the outcome scores of existing instruments (that is, the MCID) may be critical for conducting and interpreting meaningful clinical trials, as well as for facilitating the establishment of treatment recommendations for patients. Methods to determine the MCID fall into two well-defined categories: distribution-based and anchor-based approaches. Distribution-based approaches rely on the statistical characteristics of the obtained samples; they include the standard error of measurement, the standard deviation, the effect size, the minimal detectable change, the reliable change index, and the standardized response mean. Anchor-based approaches compare the change in a patient-reported outcome to a second, external measure of change (one that is more clearly understood, such as a global assessment), which serves as the anchor. Finally, the Delphi technique can be applied as an adjunct to defining a clinically important difference. Despite an abundance of methods reported in the literature, little work on MCID estimation has been done in the context of SLE. As the MCID can help determine the effect of a given therapy on a patient and add meaning to statistical inferences made in clinical research, we believe there ought to be renewed focus on this area. Here, we provide an update on the use of MCIDs in clinical research, review some of the work done in this area in SLE, and propose an agenda for future research.
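
    Two of the distribution-based quantities named above have simple closed forms: the standard error of measurement, SEM = SD × √(1 − reliability), and the minimal detectable change, MDC95 = 1.96 × √2 × SEM. The Python sketch below applies them to illustrative values, not SLE-specific data.

        from math import sqrt

        def sem(sd_baseline: float, reliability: float) -> float:
            """Standard error of measurement: SD * sqrt(1 - reliability)."""
            return sd_baseline * sqrt(1 - reliability)

        def mdc95(sem_value: float) -> float:
            """Smallest change exceeding measurement error with 95% confidence."""
            return 1.96 * sqrt(2) * sem_value

        s = sem(sd_baseline=10.0, reliability=0.85)  # hypothetical 0-100 scale
        print(f"SEM = {s:.2f}, MDC95 = {mdc95(s):.2f}")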

    The impact of eHealth on the quality and safety of health care: a systematic overview

    Background: There is considerable international interest in exploiting the potential of digital solutions to enhance the quality and safety of health care. Implementations of transformative eHealth technologies are underway globally, often at very considerable cost. In order to assess the impact of eHealth solutions on the quality and safety of health care, and to inform policy decisions on eHealth deployments, we undertook a systematic review of systematic reviews assessing the effectiveness and consequences of various eHealth technologies on the quality and safety of care. Methods and Findings: We developed novel search strategies and conceptual maps of health care quality, safety, and eHealth interventions, and then systematically identified, scrutinised, and synthesised the systematic review literature. Major biomedical databases were searched to identify systematic reviews published between 1997 and 2010. Related theoretical, methodological, and technical material was also reviewed. We identified 53 systematic reviews that focused on assessing the impact of eHealth interventions on the quality and/or safety of health care, and 55 supplementary systematic reviews providing relevant supportive information. This systematic review literature was found to be generally of substandard quality with regard to methodology, reporting, and utility. We thematically categorised eHealth technologies into three main areas: (1) storing, managing, and transmitting data; (2) clinical decision support; and (3) facilitating care from a distance. We found that, despite support from policymakers, there was relatively little empirical evidence to substantiate many of the claims made in relation to these technologies. Whether the success of the relatively few solutions identified as improving quality and safety would continue if they were deployed beyond the contexts in which they were originally developed has yet to be established. Importantly, best-practice guidelines for effective development and deployment strategies are lacking. Conclusions: There is a large gap between the postulated and empirically demonstrated benefits of eHealth technologies. In addition, there is a lack of robust research on the risks of implementing these technologies, and their cost-effectiveness has yet to be demonstrated, despite being frequently promoted by policymakers and “techno-enthusiasts” as if it were a given. In light of the paucity of evidence on improvements in patient outcomes, as well as the lack of evidence on cost-effectiveness, it is vital that future eHealth technologies be evaluated against a comprehensive set of measures, ideally throughout all stages of the technology's life cycle. Such evaluation should be characterised by careful attention to socio-technical factors to maximise the likelihood of successful implementation and adoption.